Confidential Computing

Introduction[edit]

Confidential computing is an advanced security technology that protects data while it's in use, complementing existing protections for data at rest and in transit. The goal is to isolate sensitive data from unauthorized access, even from the cloud provider or system administrators.

One use case of confidential computing is virtual machines that can be trusted to remain uncompromised even if the cloud provider is malicious. This is difficult to achieve because, with most operating systems as currently designed, a malicious host machine can control the environment, including all VMs, in their entirety.

There are two primary threats that must be defended against for confidential computing to work: compromise of the host operating system/hypervisor, and compromise of the host hardware. Three separate methods must be used in combination to defend against these threats:

  • Encryption of virtual machine RAM and of both physical and virtual machine disks. This is required to prevent data compromise in the event of an attacker gaining physical access to the hardware.
  • Isolation of host and guest kernels. A properly configured host must NOT be able to freely inspect virtual machine RAM or inject malicious commands into virtual machines.
  • Remote attestation. A user must be able to verify that the host is properly configured without having to trust the host.

The sections below describe related projects and technologies that provide useful building blocks towards confidential computing.

Cloud Server Permission Goals[edit]

The cloud server administrator should have the capability:

  • to commission access for new customers
  • to remove access for customers (such as on non-payment)

The cloud server administrator should not have the capability to:

  • read the hard drive or RAM contents of the customer.
  • observe or manipulate software execution of the customer.

Threat Model[edit]

  • 1) A random cloud employee cold-boot attacking random machines -> RamCrypt's ~97% coverage is still better than nothing.
  • 2) A sophisticated, targeted attack against a specific server -> RamCrypt's ~97% coverage is insufficient.
  • No need to worry about an intentional backdoor in Intel/AMD, because in that case the CPU should not be used at all.
    • With this in mind, Intel TME-MK might be more worthwhile than RamCrypt, unless we assume that Intel TME-MK might be buggy. But then RamCrypt is certainly imperfect due to the 3% unencrypted issue.

Implementation Goals[edit]

  • If at all possible, the host OS should be remotely attestable. If the host OS can be freely controlled by an attacker, advanced attacks such as MemBuster can be used to carry out side-channel attacks against transparently encrypted memory. (https://www.usenix.org/conference/usenixsecurity20/presentation/lee-dayeolarchive.org)
  • If at all possible, the hardware state of the host machine should be remotely attestable. Otherwise a malicious PCIe or other hardware device could be used to attack the system in various ways. It may be difficult to tell for sure if a device is what it says it is, but if that hurdle can be overcome, something such as System Transparency could be used to compare the system's hardware state to a known-good state before permitting the host OS to boot.
  • Individual guest operating systems must be remotely attestable.
  • The RAM of virtual machines should be transparently encrypted by the host's kernel, using a key that is not saved to RAM.
    • If this is impossible, enabling the secure encryption of the data of individual processes may be an acceptable workaround.
  • The disk contents of virtual machines must be transparently encrypted (see the sketch after this list).
  • Ideally, the solution should be entirely software-based, not relying on hardware-based RAM encryption and/or a TPM, if that is at all possible. TODO: research
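Regarding the disk-encryption goal above, the following is a minimal sketch (not a vetted recipe) of creating a LUKS-encrypted guest disk image on the host using QEMU's built-in qcow2 encryption; the image path, size and passphrase are placeholders:

# create a secret object holding the passphrase and a LUKS-encrypted qcow2 image
qemu-img create -f qcow2 \
  --object secret,id=sec0,data=CHANGE-ME \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  /var/lib/libvirt/images/guest.qcow2 20G

# "encrypted: yes" should appear in the image information
qemu-img info /var/lib/libvirt/images/guest.qcow2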

Components[edit]

Full Disk Encryption Key Protection from Cold Boot Attacks[edit]

TRESOR[edit]

  • https://www.cs1.tf.fau.de/research/system-security-group/tresor-trevisor-armored/archive.org
  • Provides full disk encryption capabilities using an encryption key that is stored only within CPU registers. Uses x86-64 debug registers.
  • TRESOR is actually more performant than standard AES encryption.
  • When used on the host machine, it makes it impossible for a cold-boot attack to compromise disk encryption keys without also being able to compromise the CPU's internal state itself.
  • Similar advantages are conferred on guest VMs if their CPU registers are encrypted when saved to RAM.
  • Not yet present in the Linux kernel.
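TRESOR's performance depends on the AES-NI instruction set; a quick check whether the CPU advertises it (this only confirms the instructions exist, not that a TRESOR-patched kernel is in use):

# prints "aes" if the CPU supports AES-NI
grep -m1 -ow aes /proc/cpuinfo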

Loop-Amnesia[edit]

  • https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=e5f940decaa589f3b2030429f48739281839e4d8archive.org
  • Similar to TRESOR, provides full disk encryption.
  • Only saves disk encryption keys in RAM in encrypted form. A "master key" is used to decrypt the encryption keys in CPU registers and never allows them to touch RAM unencrypted. Uses performance counter MSR registers.
  • Does not use CPU-accelerated AES instructions.
  • Not yet present in the Linux kernel.
  • Slows down disk I/O performance by a factor of 2.
  • Apparently will destroy any filesystems it protects if the system is put into suspend and then resumed. See warnings at https://patricksimmons.us/amnesia.htmlarchive.org. TRESOR avoids this by re-prompting for an FDE passphrase on resume.
  • Mentions non-maskable interrupts (NMIs) as a potential attack vector to get disk decryption state flushed to RAM via a carefully timed interrupt injection. This is a real danger in a cloud provider scenario. The paper mentions that the Linux NMI handler could be modified to wipe all general-purpose register contents from RAM after being invoked, in order to reduce the likelihood of this causing a problem. This wouldn't entirely mitigate the attack, however, and a DDR interposer could render the subsequent wiping of RAM futile. This is probably an effective attack against TRESOR too.

RAM Encryption[edit]

Intel[edit]

Intel TME-MK[edit]

Total Memory Encryption - Multi-Key (TME-MK)

Intel has introduced Intel® Total Memory Encryption - Multi-Key (Intel® TME-MK), a new memory encryption technology, on 3rd generation Intel® Xeon® Scalable processors (formerly code named Ice Lake).https://www.intel.com/content/www/us/en/developer/articles/news/runtime-encryption-of-memory-with-intel-tme-mk.htmlarchive.org

  • Provides granular, user-controllable full memory encryption. Multiple keys can be used at once, allowing a hypervisor to use one key for the host and different keys for each guest. The performance impact of using TME-MK is negligible (less than 2%).
    • Allows the use of tenant-provided encryption keys, which theoretically should prevent an attacker with access to Intel from compromising the keys.
    • Does not attempt to enforce guest/hypervisor isolation; the hypervisor must be trusted.
    • Only supports 128-bit AES encryption. Does not support 256-bit AES encryption.
  • GNOME shows status of RAM encryption: https://www.reddit.com/r/gnome/comments/12lc5l4/how_do_i_fix_the_encrypted_ram_status_in_device/archive.org

Check whether TME is available and enabled:

grep -i tme /proc/cpuinfo

sudo dmesg | grep -i x86/tme

Example output in dmesg might be: [1]

x86/tme: enabled by BIOS
x86/tme: Unknown policy is active: 0x2
x86/mktme: No known encryption algorithm is supported: 0x4
x86/mktme: enabled by BIOS
x86/mktme: 15 KeyIDs available

fwupd's host security report also includes an encrypted RAM check:

sudo fwupdmgr security

Intel® Hardware Shield on the Intel vPro® platform includes both software and hardware capabilities that help protect system memory. The linked document provides in-depth information on Intel® TME.https://www.intel.com/content/www/us/en/architecture-and-technology/vpro/hardware-shield/total-memory-encrpytion.htmlarchive.org

Intel TME-MK might only be available in combination with Intel vPro. However, Intel vPro often (or always?) comes packaged with Intel ME and Intel AMT, which many users would rather not have inside their CPU. See Out-of-band Management Technology.

Intel TDX[edit]

  • https://cdrdv2.intel.com/v1/dl/getContent/690419archive.org
    • Provides a full suite of confidential computing technologies in one solution, including memory encryption, host/guest isolation, remote attestation, and both physical and software based attack mitigations.
    • The TDX module is FOSS under the MIT license and can be built, but only builds signed by Intel can be loaded by a TDX-supporting CPU, thus it is impossible to modify the TDX module and run it yourself. One can only verify that the binaries shipped by Intel correspond to their source code.
    • Confidential VMs running under TDX are known as Trust Domains (abbreviated TDs).
    • Considers the VMM emulator (such as QEMU) untrusted, thus device drivers in TDs are an attack surface.
    • Only allows remote attestation of the VMs, not the host machine.
    • Only supports 128-bit AES encryption for memory.
    • Relies on encryption keys generated by the CPU for most memory encryption tasks and for remote attestation. Thus, potentially compromisable by Intel.
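A rough, version-dependent way to check for TDX from a running Linux system (flag and log message names may differ between kernel versions):

# inside a TD guest, recent kernels expose a tdx_guest CPU flag
grep -m1 -ow tdx_guest /proc/cpuinfo

# on the host, TDX initialization messages appear in the kernel log
sudo dmesg | grep -i tdx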

Intel SGX[edit]

  • TODO: Find the whitepaper and link it here; Intel no longer provides the SGX product brief, and Archive.org was down when checked.
    • Allows running limited security-sensitive code in an encrypted, remotely attestable enclave.
    • Code running in enclaves is very limited in capabilities, though this can be circumvented to some degree with the help of Gramine.
    • Cannot be used to encrypt an entire VM.
    • Relies on encryption keys controlled by Intel, thus potentially vulnerable to an attacker with access to Intel.
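A minimal check for SGX support on Linux with the in-kernel SGX driver (kernel 5.11 or later); the device node names are those used by the upstream driver:

# CPU advertises SGX support
grep -m1 -ow sgx /proc/cpuinfo

# the in-kernel driver exposes these device nodes when SGX is usable
ls -l /dev/sgx_enclave /dev/sgx_provision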

AMD[edit]

AMD SEV technologies[edit]

AMD SME[edit]

AMD TSME[edit]

Transparent SME (TSME) as the name implies is a stricter subset of SME that requires no software intervention. Under TSME, all memory pages are encrypted regardless of the C-bit value. TSME is designed for legacy OS and hypervisor software that cannot be modified. Note that when TSME is enabled, standard SME as well as SEV are still available. TSME and SME share a memory encryption key.https://en.wikichip.org/wiki/x86/sme#Transparent_SMEarchive.org

  • https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/memory-encryption-white-paper.pdfarchive.org
    • Encrypts a machine's RAM contents transparently. It generates an encryption key at boot time known only to the CPU, then uses that key to encrypt anything written to RAM and decrypt anything read from RAM.
      • Available on some consumer systems (where it is branded as "Memory Guard").
      • Can be effective at stopping cold-boot attacks, thus it is recommended to enable TSME or Memory Guard if your hardware supports it.
      • TSME is not visible to the OS or applications running on the hardware, and thus does not provide any additional isolation between applications or system components.
      • Makes use of keys generated by AMD firmware and does not allow the user to specify the key to use. Thus, can potentially be compromised by AMD.

How to check if TSME is enabled?

sudo dmesg | grep SME

Expected output:

AMD Secure Memory Encryption (SME) active
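TSME itself is toggled in firmware, but if the CPU supports SME and the kernel does not report it as active, the following is a hedged sketch for enabling standard SME via the kernel command line (requires a kernel built with CONFIG_AMD_MEM_ENCRYPT; the GRUB file path is the Debian default):

# CPU flag indicating SME support
grep -m1 -ow sme /proc/cpuinfo

# add mem_encrypt=on to the kernel command line, then reboot
sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="mem_encrypt=on /' /etc/default/grub
sudo update-grub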

AMD Infinity Guard[edit]

GD-183A: AMD Infinity Guard features vary by EPYC™ Processor generations and/or series. Infinity Guard security features must be enabled by server OEMs and/or Cloud Service Providers to operate. Check with your OEM or provider to confirm support of these features. Learn more about Infinity Guard at http://www.amd.com/en/products/processors/server/epyc/infinity-guard.htmlarchive.org.footnote 1 on https://www.amd.com/en/products/processors/server/epyc/infinity-guard.htmlarchive.org

  • Infinity Guard is a brand name for AMD's collection of hypervisor security features, including the Secure Encrypted Virtualization technologies.
  • AMD Infinity Guard and AMD TSME (Transparent Secure Memory Encryption) are not the same, but AMD TSME is a component of AMD Infinity Guard.
  • availability:
    • Best to confirm the feature status with the server provider before purchasing.
    • Some or all AMD EPYC CPUs have the TSME feature. Best to check the availability of the feature before purchasing the CPU or renting the server.
    • https://www.hetzner.com/news/new-amd-ryzen-7950-server/archive.org
      • AMD EPYC available from root server provider Hetzner for 236 EUR / month
        • TODO: Does this encrypt the RAM and protect it from a hypothetical compromise of Hetzner?

RamCrypt[edit]

  • https://faui1-files.cs.fau.de/filepool/projects/ramcrypt/ramcrypt.pdfarchive.org
    • Provides memory encryption for individual processes running under a Linux kernel. Fully software-based, uses TRESOR for encryption of RAM. Due to being software-based, RamCrypt must store process memory in a decrypted form when that memory is being actively used. Otherwise, the software using that memory would be unable to understand the memory contents. This means that sensitive data is occasionally exposed in cleartext in system memory, though it is later re-encrypted when no longer in active use. RamCrypt therefore *mitigates* cold-boot attacks, but does not *prevent* them. RamCrypt also comes with a very substantial performance hit (at least 25%, sometimes much more).
      • It may be possible to use no-fill cache mode (a.k.a. cache-as-RAM) to mitigate RamCrypt's shortcomings in this area. It is not yet known if this is possible, and it is very likely that doing so would come with an even greater performance hit. See https://frozencache.blogspot.com/archive.org
        • PrivateCore vCage somehow manages to control the CPU cache in a manner that doesn't harm performance too badly and does achieve the desired goal, so this may very well be possible. No-fill cache mode on Intel CPUs affects the L3 cache, so in theory, if the cache were managed in a fully manual manner, it could be possible to keep decrypted data present only in the CPU cache.
      • Could be made to support 256-bit AES? The default implementation only supports 128-bit AES.
  • The "3% problem". 3% of memory were unencrypted.
    • At this level of threat model where we are considering a cold boot attack by the cloud server admin, unfortunately, 97% encrypted and 3% unencrypted means still a 100% loss.
      • The way to break this is my measuring the electricity to the RAM banks. There was a similar attack against ledger hardware wallet by researchers presenting at CCC (wallet.fail).
      • Or by using a malicious RAM bank that logs access to everything that goes into as a hardware proxy. Unknown if that exists yet but it seems quite easier to develop than breaking into the CPUs cache with a cold boot attack.

PrivateCore vCage[edit]

  • Provides similar security to Intel TME / AMD SME, but appears to use a software implementation rather than hardware CPU features.
  • Somehow keeps an entire Linux kernel and KVM hypervisor resident in L3 cache
  • Company was acquired by Facebook/Meta in 2014, website appears "dead" ever since
  • The technology was based on Linux and KVM, so their source code modifications to Linux might be retrievable if it is possible to contact them.
    • No source code appears to have been published
    • No contact information was visible on the website
    • Facebook acquired PrivateCore, and its products have never been made available to the general public.

OpenPOWER[edit]

RISC-V[edit]

  • Good enough performance to consider using?

Kernel isolation[edit]

Use Case[edit]

Only needed if intending to use VMs.

In case of "root servers" / dedicated servers with only 1 customer, there is no need for for pKVM, Xen or similar.

pKVM[edit]

  • https://source.android.com/docs/core/virtualization/architecturearchive.org
  • https://lwn.net/Articles/836693/archive.org
    • Provides separation of host and guest operating systems on arm64 platforms (specifically Android). Uses ARM virtualization features to install a portion of the KVM hypervisor in a higher-privileged area than the rest of the kernel, thus preventing the kernel from accessing the memory of, or interfering with the execution of, VMs on the same physical device. Additionally, provides strong local and remote attestation features both within pKVM itself and with Verified Boot.
      • Guest operating systems are not assumed to be locked down, arbitrary code can be run in guests with pKVM. See https://source.android.com/docs/core/virtualization/securityarchive.org
      • Local attestation is provided by pKVM itself to prevent malicious VM tampering.
      • pKVM's security features are not guaranteed to work correctly if the device pKVM runs on is unlocked. Verified Boot can be used for remote attestation of the device to detect when this is the case (this does not use SafetyNet/Play Integrity).
      • Currently ARM64-specific; while some of the code is in the upstream Linux kernel, it is likely that some of it may be Android-specific and thus hard to use outside of the context of Android.
    • KVM private memory (https://lwn.net/Articles/890224/archive.org) Isolates guest memory from a malicious KVM hypervisor.

Xen[edit]

  • https://xenproject.orgarchive.org
  • A true type-1 hypervisor. Much smaller than the Linux kernel, currently at around 370K lines of code, thus has a dramatically lower attack surface and is much easier to trust than a full Linux kernel. While a privileged virtual machine (dom0) can *control* the hypervisor, it cannot inspect VM memory. Xen also provides guest (domU) protection from dom0 to some degree (this needs to be researched further). Additionally, it provides a hardware TPM-backed vTPM implementation. Used in Qubes OS.
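From dom0, basic information about the running hypervisor (version, virtualization capabilities) can be queried with the xl toolstack; for example:

sudo xl info | grep -E 'xen_version|xen_caps|virt_caps'

sudo xl list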

KVM private memory[edit]

Gramine[edit]

  • https://github.com/gramineproject/graminearchive.org
    • Allows running unmodified Linux applications within SGX enclaves.
    • Works at the application level, not the VM level.
    • Cannot be used to remotely attest the host machine or the guest running the enclave, instead attempts to keep the application safe from the host and guest.
    • Used by Enclaive (https://github.com/enclaivearchive.org) to provide SGX-protected versions of various apps and programming languages.
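A hedged sketch of the usual Gramine workflow for wrapping an application in an SGX enclave, modeled on the project's quick-start documentation (the application name and manifest template are placeholders):

# generate an enclave signing key, expand the manifest template, sign it, then run under SGX
gramine-sgx-gen-private-key
gramine-manifest -Dlog_level=error helloworld.manifest.template helloworld.manifest
gramine-sgx-sign --manifest helloworld.manifest --output helloworld.manifest.sgx
gramine-sgx helloworld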

Verified Boot[edit]

See Verified Boot.

non-root enforcement[edit]

Non-root enforcement limits the privileges and access rights of non-root users or processes, ensuring they can't make system-level changes or access sensitive parts of the system without proper authorization.

Nobody, not even the cloud server administrator, should have access to root (administrative rights). This is because with root rights, the cloud server administrator could gain access to sensitive data on the hard drive and/or RAM contents.
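As one small building block in this direction (and by no means a defense against an administrator who controls the host), the root account can be locked so that all administrative actions must go through restricted, auditable mechanisms; a minimal sketch:

# lock the root account's password
sudo passwd -l root

# additionally forbid root logins over SSH (OpenSSH 8.2+ drop-in directory), then reload the service
echo 'PermitRootLogin no' | sudo tee /etc/ssh/sshd_config.d/no-root.conf
sudo systemctl reload ssh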

related: Dev/boot modes

Remote Attestation[edit]

What is Remote Attestation[edit]

  • Purpose: Remote attestation is used to prove to a remote party (usually a server or another device) that the software and hardware configuration of a system is in a known and trusted state. It provides an external entity with proof that the system is running trusted software, based on cryptographic measurements (such as hashes or signatures).
  • Process: A trusted platform module (TPM) or similar hardware records cryptographic measurements during the boot process. These measurements (hashes of boot components) are sent to a remote party, which compares them to a list of trusted values. If the measurements match the expected trusted state, the remote party can trust the system's integrity. (See the tpm2-tools sketch after this list.)
  • Examples:
    • In cloud computing, remote attestation is used to verify that a virtual machine (VM) or physical server is in a trusted state before allowing it to interact with sensitive data or perform specific operations.
    • On Android, the operating system or application ... see Device Attestation such as SafetyNet.
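Building on the measurement-and-quote process described above, a minimal tpm2-tools sketch for reading PCR values and producing a signed quote that a remote verifier can check (key file names, the PCR selection and the nonce are illustrative placeholders):

# read the boot measurements recorded in the TPM's PCRs
tpm2_pcrread sha256:0,1,2,3,4,5,6,7

# create an endorsement key and an attestation key, then quote selected PCRs with a verifier-supplied nonce
tpm2_createek -c ek.ctx
tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub
tpm2_quote -c ak.ctx -l sha256:0,2,4,7 -q abc123 -m quote.msg -s quote.sig -o quote.pcrs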

In an ideal situation, the keys used for remote attestation should be settable by a known-trusted entity and cycled out without requiring hardware replacement. Raptor Engineering's FlexVer is capable of this. Unfortunately, many remote attestation technologies used in amd64-family CPUs (Intel TDX, Intel SGX, AMD SEV) rely on a key that is hardcoded into the CPU and ultimately controlled by the CPU vendor. TPM chips also oftentimes rely on the security of a key written to them by the manufacturer. Should one of these keys ever become compromised, remote attestation with these technologies will no longer be trustworthy, and replacing the compromised key with a good one will require that every single affected CPU or TPM device be replaced with a new one containing a new key.

Intel Trusted eXecution Technology[edit]

Raptor FlexVer[edit]

Trusted Platform Module remote attestation[edit]

swtpm[edit]

  • https://github.com/stefanberger/swtpmarchive.org
  • Provides a virtual TPM for use in virtual machines. If used on a remotely attested, trusted host, allows remote attestation of individual virtual machines.
    • Xen has its own vTPM subsystem which makes this unnecessary for a Xen-based deployment.
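A hedged sketch of attaching a swtpm-backed TPM 2.0 device to a QEMU/KVM guest; the state directory and socket path are placeholders, and the QEMU options follow the documented TPM emulator usage:

# start the TPM emulator with persistent state for this VM
mkdir -p /var/lib/swtpm/vm1
swtpm socket --tpm2 --tpmstate dir=/var/lib/swtpm/vm1 \
  --ctrl type=unixio,path=/var/lib/swtpm/vm1/swtpm.sock &

# point QEMU at the emulator socket (other VM options omitted)
qemu-system-x86_64 \
  -chardev socket,id=chrtpm,path=/var/lib/swtpm/vm1/swtpm.sock \
  -tpmdev emulator,id=tpm0,chardev=chrtpm \
  -device tpm-tis,tpmdev=tpm0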

Keylime[edit]

System Transparency[edit]

  • https://www.system-transparency.org/archive.org
  • https://mullvad.net/en/blog/2019/6/3/system-transparency-futurearchive.org
  • https://docs.system-transparency.org/st-1.1.0/docs/reference/stboot-system/archive.org
    • Uses a Linux Unified Kernel Image (UKI) as a bootloader for another OS.
    • Carefully checks the integrity of the OS being booted before allowing it to be loaded and run.
    • Tools are included for generating the bootloader executable and configuring it properly.
    • Potentially usable for verifying the hardware state of the host machine. Intel TXT could be used to load the stboot bootloader in a verified fashion, then stboot could verify that the system's hardware matches a known-good state before loading the real OS.
    • Does not seem to have remote attestation tools built in yet? Is the TPM supposed to handle this?
    • Requires that the bootloader itself be uncompromised at boot - TPM attestation or Intel TXT might be useful for this.

Miscellaneous useful technologies[edit]

Raptor LPC Guard[edit]

Microsoft SEAL[edit]

Constellation[edit]

  • https://docs.edgeless.systems/constellation/archive.org
    • Runs entire Kubernetes clusters inside confidential VMs, leveraging Intel TDX or AMD SEV along with remote attestation to keep the cluster secure
    • Orchestrates a number of other components critical to confidential computing such as networking and drive encryption, key management, etc. Basically takes modern confidential computing functionality and puts it together into a single solution
    • Open-source code under GNU Affero General Public License, prebuilt binaries are offered under a freemium licensing model
    • Relies heavily on hardware-based confidential computing features requiring silicon vendor trust
    • Depending on the cloud provider used, the cloud provider's hypervisor is considered trusted, greatly reducing security unless one can audit and remotely attest the hypervisor
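Assuming the CLI workflow described in the Constellation documentation, attestation of a running cluster is driven from the user's workstation roughly as follows (subcommand names are taken from the docs and may change between releases):

# provision the confidential cluster, then verify its attestation statement
constellation create
constellation verify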

Contrast[edit]

  • https://docs.edgeless.systems/contrast/archive.org
    • Runs individual Kubernetes containers inside a cluster managed and hosted by a third party.
    • Orchestrates a number of other components critical to confidential computing such as networking and drive encryption, key management, etc. Basically takes modern confidential computing functionality and puts it together into a single solution
    • Open-source code under GNU Affero General Public License, license of binaries is unclear
    • Relies heavily on hardware-based confidential computing features requiring silicon vendor trust

Continuum[edit]

  • https://docs.edgeless.systems/constellation/overview/confidential-kubernetesarchive.org
    • Runs AI code in a confidential VM, using sandboxing to make it more difficult for an AI code author to violate the privacy of the user interacting with the AI model.
    • Does not use homomorphic encryption, instead relying on sandboxing to prevent the AI code from doing anything that it's not supposed to.
    • Allows the user to remotely attest the confidential VMs to ensure they are trustworthy.

Missing Pieces[edit]

  • How does one ensure that the VM they uploaded to the cloud vendor is really the one they want to run? One possible idea is to upload pre-installed, fully encrypted VMs to the cloud vendor, which can be booted only by passing an encryption key to the VM at boot time over an end-to-end encrypted channel (SSH login to the initramfs). The VM would then contain sha512 hashes for every static file in it, with those hashes protected by a digital signature. On first boot, the VM would run a self-check using these hashes and the signature to ensure it had not been tampered with (see the sketch after this list). If this check passed, the VM would consider itself trusted and set itself up for vTPM-powered remote attestation. Investigate whether a technology that performs the same functions already exists or is even needed; consider creating it if it does not exist and is needed.
  • Does Xen support Intel TME-MK?
  • Is there a way to remotely attest more than just the OS?
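A minimal sketch of the self-check idea from the first bullet above, using standard tools only (the file layout, key handling and directory choices are illustrative):

# at image build time: hash every static file and sign the manifest
find /usr /etc -xdev -type f -print0 | xargs -0 sha512sum > /var/lib/selfcheck/manifest.sha512
gpg --detach-sign --armor /var/lib/selfcheck/manifest.sha512

# at first boot inside the VM: verify the signature, then verify every hash
gpg --verify /var/lib/selfcheck/manifest.sha512.asc /var/lib/selfcheck/manifest.sha512
sha512sum --quiet -c /var/lib/selfcheck/manifest.sha512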

Putting it all together[edit]

  • A cloud vendor uses Xen and a minimal dom0 to run cloud servers. tboot is used to ensure the hypervisor is secure.
  • The cloud vendor uses TRESOR for full disk encryption, storing the encryption key on a hardware authentication device, such that in order to boot the device, a trusted staff member must insert the key into the server at boot time. (With the use of Intel TME-MK, it might theoretically be possible to implement cold-boot proof disk encryption *without* needing TRESOR; however, this requires further investigation.)
  • The cloud vendor uses Intel TME-MK to encrypt the RAM of the host, preventing cold-boot attacks by a third party. At boot, the host generates an encryption key of its own and re-encrypts its memory with that new key, so as not to trust the processor-generated key used by default.
  • Remote users can remotely attest the cloud vendor's physical machines at will, ensuring they trust the Xen and dom0 code running on them.
  • Verified Boot and non-root enforcement attempt to mitigate tampering of the running system.
  • Users upload pre-installed, encrypted VM images to the cloud vendor, which are kept tamper-proof using something (see the "missing pieces" section above). Upon passing a self-check, the VM sets itself up for remote attestation using a hypervisor-provided vTPM.
  • Xen uses Intel TME-MK to encrypt the RAM of the running VM with a user-provided encryption key, preventing cold-boot attacks by either the vendor or a third party.
  • Users remotely attest running VMs to ensure they are trusted.
  • At this point, the user has a confidential, cold-boot-proof VM. The cloud vendor must be trusted to keep their Xen and dom0 software up-to-date so as to prevent vulnerabilities from allowing VM compromise. The cloud customer can verify remote cloud server software version numbers using remote attestation.

Rough time estimate to get TRESOR working in Linux: possibly around 80 hours of actual time invested (four hours a day, five days a week, for one month). This is a very rough estimate since actual time will depend on how receptive devs are to the ideas and how much effort is required to polish their implementation to a usable state.

Technologies investigated but not useful[edit]

  • Microsoft Pluton: Basically a fancy hardware TPM integrated directly into the CPU. Under Linux, it only provides TPM features. It is implemented as another "secure processor" similar to Intel ME. Beyond TPM-like functionality, Pluton also provides features somewhat similar to Google's SafetyNet, but only on Windows. https://gabrielsieben.tech/2022/07/25/the-power-of-microsoft-pluton-2/archive.org
  • Azure Key Vault: Cloud-based, HSM-backed secret management system for applications. Claims to be unable to see customer-provided secrets but whether this is enforced by hardware or by policy is unclear. Not particularly useful for confidential computing.
  • Google Cloud Key Management: Very similar to Azure Key Vault but from Google rather than from Microsoft.
  • HashiCorp Vault: Server-based centralized application secret management system. Not directly related to confidential computing, utility in a confidential computing scenario is questionable. Would probably be useful for individuals running cloud services on confidential VMs.
  • Thales cloud security: Very vague umbrella term for a number of different "security" technologies provided by Thales. Does not appear directly related to confidential computing.
  • AWS Cryptographic Computing: Homomorphic encryption based technologies for processing data without decrypting it. Mostly used for machine learning purposes.
  • AWS Clean Rooms: Used for allowing companies to share data analysis information with each other without exposing sensitive info directly. Unrelated to confidential computing. Appears privacy-invasive.
  • Key Management Interoperability Protocol: Key management server software, conceptually similar in some ways to Hashicorp Vault. Not directly related to confidential computing, utility in a confidential computing scenario is questionable. Would probably be useful for individuals running cloud services on confidential VMs.

Discussions[edit]

Cloud Vendors[edit]

NOTE: This is not an endorsement or recommendation of any vendor or technology. Some of the vendors listed here have security concerns associated with them.

Microsoft[edit]

Google[edit]

IntegriCloud raptorengineering[edit]

  • https://www.integricloud.com/archive.org
    • Appears to be RAPTOR Engineering's cloud computing division.
    • Only provides firmware source code to "active users of the IntegriCloud™ platform, auditors for active users of the IntegriCloud™ platform, and security researchers."
    • Unclear whether physical auditing of the cloud servers is permitted, which is slightly strange, since Raptor FlexVer, from what I understood from the papers, relies on you or someone you trust auditing the servers.

NOTE: The source packages required to build reproducible versions of our node firmware and software stack are only available to active users of the IntegriCloud™ platform, auditors for active users of the IntegriCloud™ platform, and security researchers. Please contact us at support@integricloud.com with your IntegriCloud™ username to request access to the source packages and build instructions.https://www.integricloud.com/content/base/software.htmlarchive.org

  • Asked about this by e-mail in October 2024 but had not received a reply at the time of writing.
Hello,

During our research on confidential computing [1], we came across IntegriCloud.

From an initial look, it seems your approach aligns well with our values in the security community, particularly around principles like security, Freedom Software, and transparency.

However, one statement on your website struck us as somewhat inconsistent with these values:

> NOTE: The source packages required to build reproducible versions of our node firmware and software stack are only available to active users of the IntegriCloud™ platform, auditors for active users of the IntegriCloud™ platform, and security researchers.

Why the restriction? Wouldn't it make more sense to host the source code publicly on a source control platform, such as Git, to ensure greater openness and collaboration?

Looking forward to your thoughts.

I would also like to post your reply in full, without redactions, on our website for transparency and to satisfy the curiosity of our readers.

Best regards,
Patrick

[1] https://www.kicksecure.com/wiki/Dev/confidential_computing
[2] https://www.integricloud.com/content/base/software.html

Enclaive[edit]

  • https://www.enclaive.io/archive.org
  • https://github.com/enclaivearchive.org
    • Provides software for managing confidential computing resources on other clouds.
    • Packages several applications with Gramine and Docker to allow them to run confidentially on otherwise untrusted VMs with Intel SGX.
    • Many of their open-source projects appear to not have seen much maintenance in the recent past (many repos have a last commit date of one year ago)

Homomorphic Encryption[edit]

Full process memory encryption research[edit]

This is research that has been done to implement a RamCrypt-based memory encryption strategy that keeps decrypted information in the CPU cache, allowing important data to only touch system RAM in an encrypted form.

No-fill cache mode[edit]

This is a CPU cache mode present on Intel CPUs that (among other things) prevents cache contents from being replaced when a read or write misses the cache and accesses system memory instead. If a memory range is in the cache, the memory range's cache mode is set to writeback, and then the cache is placed into no-fill mode, it will essentially "freeze" that location of memory in the system's cache, ensuring that read accesses to that area of memory only access the cache, and that write accesses only update the cache. Note however it is possible to explicitly instruct the CPU to flush cache contents to system RAM, and that the instruction that does this (wbinvd) must not be executed while sensitive data is present in the cache.

Memory paging[edit]

See https://wiki.osdev.org/Pagingarchive.org. In all 32-bit and 64-bit Intel-based systems (and really most non-embedded systems in general), memory accesses are virtualized and go through a memory management unit (MMU) that translates virtual addresses to physical addresses. This is used for a lot of things, including allowing the OS to implement a swapfile, but notably for our use one can mark a particular page of memory as being "not present" for basically any reason. RamCrypt takes advantage of this to mark an encrypted memory page as being "not present", so that any attempt to access that page triggers a page fault. This instructs the kernel to figure out why the page is marked as "not present" and fix it if possible. The kernel can then see that the page is encrypted, decrypt it, mark it as present, and then have the program attempt the memory access again.

If there is already code executing in the page fault handler to decrypt the RAM page, it should be possible to relocate it at the same time. In theory, it may be possible to freeze a relatively large portion of system RAM into cache using no-fill cache mode, then have the RamCrypt page fault handler decrypt RAM contents into the cached area. Then the page fault handler would change the address the page is located at to point to the decrypted page in cache, mark it as present, and then return control to the application (such as QEMU).

Cache as Instruction RAM[edit]

https://pete.akeo.ie/2011/08/ubrx-l2-cache-as-instruction-ram.htmlarchive.org Data loaded into cache will probably end up loaded into the L1 data cache first, which is problematic as the CPU cannot execute code in the L1 data cache. It has to be moved to the L1 instruction cache first. The UBRX project worked around this by loading executable code into the L1 data cache, then reading enough of some sort of data to eventually push the code out of L1 cache and into L2 cache. Then the CPU could be instructed to execute it. For memory pages containing executable code, this may be necessary to allow them to run after decryption. (It's unclear if this is actually still necessary - the data in the L1 cache should just be at an address, like any other form of data in memory, so it seems like the CPU should be able to figure out that it's dealing with "self-modifying" code and move the data to the L1 instruction cache by itself. Experimentation will be needed to determine if this is the case or not.)

BareMetal[edit]

https://github.com/ReturnInfinity/BareMetalarchive.org Very small operating system written mostly in assembly language for 64-bit Intel and AMD CPUs. It's small and relatively simple, thus would be useful for prototyping and benchmarking transparent RAM encryption with cache used as RAM. If a working implementation can be made here, then it is likely possible to implement the same sort of technology in Xen or Linux.

BitVisor[edit]

https://github.com/matsu/bitvisorarchive.org Small hypervisor meant to provide security features (such as transparent encryption) to a single guest OS. Could also potentially be used to prototype a RAM encryption + frozen cache implementation. Might be usable for a production implementation although this would require the use of nested virtualization, further slowing down the system.

Footnotes[edit]
